
    On board sampling of the rockfish and lingcod commercial passenger fishing vessel industry in northern and central California, May 1987 to December 1991

    From May 1987 to June 1990 and from August to December 1991, Fishery Technicians sampled catches on board 690 Commercial Passenger Fishing Vessel (CPFV) trips targeting rockfish and lingcod from the general port areas of Fort Bragg, Bodega Bay, San Francisco, Monterey, and Morro Bay. Data are presented for species composition by port area, year, and month; for catch-per-unit-effort, mean length, and length frequency of lingcod and the 18 most frequently observed rockfish species; and for trends in fishing effort related to fishing time, depth, and distance from port. Total catch estimates are presented based on unadjusted logbook records, logbook records adjusted by sampling data and compliance rates, and effort data from a marine recreational fishing statistics survey. Average catch of kept fish per angler day was 11.8, and average catch of kept fish per angler hour was 3.7. A trend of increasing frequency of trips to deep (>40 fm) locations was observed in the Bodega Bay, San Francisco, and Monterey areas from 1988 to 1990-91. No trend was evident relating trip frequency to distance from port. A total of 74 species was observed in the catch during the study. Rockfishes comprised 88.5% to 97.9% by number of the observed catch by port area. The five most frequently observed species were chilipepper, blue, yellowtail, and widow rockfishes, and bocaccio, with lingcod ranking seventh. In general, mean length and catch-per-angler-hour of sport fishes caught by CPFV anglers varied considerably and did not show steady declines during the study period. However, port-specific areas of major concern were identified for chilipepper, lingcod, and black rockfish, and to a lesser extent brown, canary, vermilion, yelloweye, olive, and widow rockfish. These concerns included a steadily declining catch rate, steadily declining mean length, and a high percentage of sexually immature fish in the sampled catch. Recent sampling of the commercial hook-and-line fishery in northern and central California indicated that most species of rockfishes taken by CPFV anglers are also harvested commercially. (261 pp.)
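
    The catch-per-unit-effort figures above reduce to simple ratios over trip records. The following Python sketch illustrates that arithmetic under an assumed record layout of (kept_fish, anglers, hours_fished) per trip; the numbers are invented, not the study's data:

        # Hypothetical trip records: (kept_fish, anglers, hours_fished)
        trips = [
            (340, 28, 7.5),
            (210, 19, 6.0),
            (415, 31, 8.0),
        ]

        total_kept = sum(kept for kept, _, _ in trips)
        angler_days = sum(anglers for _, anglers, _ in trips)  # one day per angler per trip
        angler_hours = sum(anglers * hours for _, anglers, hours in trips)

        print(f"kept fish per angler day:  {total_kept / angler_days:.1f}")
        print(f"kept fish per angler hour: {total_kept / angler_hours:.1f}")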

    Conceptualizing human resilience in the face of the global epidemiology of cyber attacks

    Computer security is a complex global phenomenon in which different populations interact, and the infection of one person creates risk for another. Given the dynamics and scope of cyber campaigns, studies of local resilience without reference to global populations are inadequate. In this paper we describe a set of minimal requirements for implementing a global epidemiological infrastructure to understand and respond to large-scale computer security outbreaks. We enumerate the relevant dimensions and the applicable measurement tools, and define a systematic approach to evaluating cyber security resilience. Drawing on our experience in conceptualizing and designing a cross-national coordinated phishing resilience evaluation, we describe the cultural, logistical, and regulatory challenges to this proposed public-health approach to global computer assault resilience. We conclude that mechanisms for systematic evaluation of global attacks, and of resilience against those attacks, already exist. Coordinated global science is needed to address organised global e-crime.

    Why do users trust the wrong messages? A behavioural model of phishing

    Given the rise of phishing over the past five years, a recurring question is why users continue to fall for these scams. Various technical countermeasures have been proposed to counter phishing, but none has yet comprehensively succeeded in preventing users from becoming victims. This paper argues that an explicit model of user psychology is required to understand user behaviour in (a) processing phishing e-mails, (b) clicking on links to phishing websites, and (c) interacting with these websites. Many users engage in e-mail and web activity with an inappropriately high level of trust: users are constantly rewarded by their online interactions, even where there is a low level of formalised trust between the sending and receiving parties; e.g., if an e-mail claims to be sent from a bank, then it must be so, even if there has been no a priori exchange of credentials mediated by a trusted third party. Previously, mathematical models have been developed to predict trust establishment and maintenance based on reputation scores (e.g., Tran et al. [1, 2]). This paper considers two inter-related questions: (a) can we model the behaviour of users learning to trust, based on non-associative models of learning (habituation and sensitisation), and (b) can we then locate this behavioural activity in a broader psychological model with a view to identifying potential countermeasures that might circumvent learned behaviour? © 2009 Crown
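
    The non-associative learning account in (a) can be made concrete with a toy update rule: habituation lets trust creep upward with each rewarded interaction, while sensitisation makes trust drop sharply after a single negative outcome. The Python sketch below is an illustrative assumption, not the paper's actual model; the function name and parameters are hypothetical:

        # Hypothetical non-associative trust model: habituation nudges trust
        # toward 1 after each rewarded interaction; sensitisation cuts trust
        # sharply after a loss. alpha and beta are illustrative parameters.
        def update_trust(trust, rewarded, alpha=0.1, beta=0.6):
            """Return updated trust in [0, 1] after one interaction."""
            if rewarded:
                return trust + alpha * (1.0 - trust)  # habituation: slow rise
            return trust * (1.0 - beta)               # sensitisation: sharp drop

        trust = 0.2
        for outcome in [True] * 10 + [False]:  # ten benign e-mails, then one scam
            trust = update_trust(trust, outcome)
            print(f"{'reward' if outcome else 'loss':6s} trust = {trust:.3f}")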

    Data loss in the British government: A bounty of credentials for organised crime

    Personal information stored in large government databases is a prime target for criminals because of its potential use in identity theft and associated crime, such as fraud. In 2007-2008, a number of very high-profile cases of data loss within the British Government, its departments, and non-departmental bodies raised three pressing issues of public significance: (1) how broad was the loss across agencies; (2) how deep was each loss incident; and (3) what counter-measures (organisational and technical) could be put in place to prevent further loss? This paper provides a chronological review of the data loss incidents and assesses the potential to mitigate risk, given organisational structures and processes, and taking into account current government calls for further medium- and long-term acquisition and storage of citizens' private data. The potential use of the "lost" credentials is discussed in the context of identity theft. © 2009 IEEE

    Malware Detection Based on Structural and Behavioural Features of API Calls

    In this paper, we propose a five-step approach to detect obfuscated malware by investigating the structural and behavioural features of API calls. We have developed a fully automated system to disassemble executables and extract API call features effectively. Using n-gram statistical analysis of binary content, we are able to classify whether an executable file is malicious or benign. Our experimental results with a dataset of 242 malware and 72 benign files have shown a promising accuracy of 96.5% for the unigram model. We also provide a preliminary analysis of our approach using a support vector machine (SVM): by varying n from 1 to 5, we analysed performance in terms of accuracy, false positives, and false negatives. By applying the SVM, we propose to train the classifier and derive an optimum n-gram model for detecting both known and unknown malware efficiently.
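
    A minimal sketch of the n-gram-plus-SVM pipeline the abstract describes, written with scikit-learn; the API-call sequences and labels below are hypothetical stand-ins for features extracted from disassembled executables, and the paper's own feature extraction is not reproduced:

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        # Hypothetical API-call sequences, one string per executable.
        api_sequences = [
            "LoadLibraryA GetProcAddress VirtualAlloc WriteProcessMemory CreateRemoteThread",
            "CreateFileW ReadFile CloseHandle",
            "RegOpenKeyExA RegSetValueExA InternetOpenA InternetConnectA",
            "GetMessageW DispatchMessageW TranslateMessage",
        ]
        labels = [1, 0, 1, 0]  # 1 = malicious, 0 = benign

        for n in range(1, 6):  # vary n from 1 to 5, as in the paper
            model = make_pipeline(
                CountVectorizer(ngram_range=(n, n), token_pattern=r"\S+", lowercase=False),
                LinearSVC(),
            )
            model.fit(api_sequences, labels)
            pred = model.predict(["VirtualAlloc WriteProcessMemory CreateRemoteThread"])
            print(f"n={n}: predicted label {pred[0]}")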

    Students' perceptions of cheating in online business courses

    Accounting majors enrolled in business courses at two different universities were asked to complete a survey questionnaire pertaining to cheating in online business courses. Specifically, students majoring in Accounting were asked about their awareness of cheating in online business courses, as well as their opinions regarding the credibility of online courses and the effectiveness of different techniques that may be used to prevent cheating. Forty-six percent of students indicated that they had knowledge of students receiving help with an online exam/quiz. Overall, 75 percent of respondents indicated that the most effective technique to prevent cheating on online exams/quizzes is the use of random question generation, so that every exam is unique. Forty-two percent of respondents disagreed with the statement "Online courses are less credible than traditional courses." While the potential for cheating in online courses seems to be well perceived, the perception of actual cheating in online courses seems to vary considerably among the students covered in this study.

    Determining provenance in phishing websites using automated conceptual analysis

    Phishing is a form of online fraud with drastic consequences for the victims and institutions being defrauded. A phishing attack tries to create a believable environment in which the intended victim will enter their confidential data, so that the attacker can use or sell this information later. To apprehend phishers, law enforcement agencies need automated systems capable of tracking the size and scope of phishing attacks, allowing them to use their resources more wisely shutting down the major players, rather than wasting resources stopping smaller operations. To develop such systems, phishing attacks need to be clustered by provenance in a way that adequately profiles these evolving attackers. The research presented in this paper examines the viability of using automated conceptual analysis, through cluster analysis techniques, on phishing websites, with the aim of determining the provenance of these phishing attacks. Conceptual analysis is performed on the source code of the websites, rather than the final text displayed to the user, eliminating problems with rendering obfuscation and increasing the distinctiveness brought about by differences in the phishers' coding styles. By using cluster analysis algorithms, distinguishing factors between groups of phishing websites can be obtained. The results indicate that it is difficult to separate websites by provenance, without also separating them by intent, by looking at the phishing websites alone. Instead, the methods discussed in this paper should form part of a larger system that uses more information about the phishing attacks.
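
    The clustering step can be illustrated with a small Python sketch that vectorises raw page source (rather than rendered text) and groups pages by coding style. TF-IDF features and k-means are illustrative choices, not necessarily the paper's exact techniques, and the HTML snippets are hypothetical:

        from sklearn.cluster import KMeans
        from sklearn.feature_extraction.text import TfidfVectorizer

        # Hypothetical phishing-page sources; two "kits" with distinct styles.
        page_sources = [
            '<form action="login.php"><input name="user"><input name="pass"></form>',
            '<FORM ACTION=verify.php><INPUT NAME=acct><INPUT NAME=pin></FORM>',
            '<div class="card"><script src="track.js"></script></div>',
            '<div class="panel"><script src="stats.js"></script></div>',
        ]

        # Tokenise on identifier-like runs and keep case, so tag style and
        # attribute naming (the phisher's coding habits) drive the features.
        vectors = TfidfVectorizer(token_pattern=r"[A-Za-z_]+",
                                  lowercase=False).fit_transform(page_sources)
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
        print(labels)  # pages sharing a coding style should share a cluster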

    AI Potentiality and Awareness: A Position Paper from the Perspective of Human-AI Teaming in Cybersecurity

    This position paper explores the broad landscape of AI potentiality in the context of cybersecurity, with a particular emphasis on its possible risk factors, which can be managed by incorporating human experts in the loop, i.e., "Human-AI" teaming. As artificial intelligence (AI) technologies advance, they will provide unparalleled opportunities for attack identification, incident response, and recovery. However, the successful deployment of AI into cybersecurity measures necessitates an in-depth understanding of its capabilities, challenges, and ethical and legal implications in order to handle the associated risk factors in real-world application areas. To this end, we emphasize the importance of a balanced approach that combines AI's computational power with human expertise. AI systems may proactively discover vulnerabilities and detect anomalies through pattern recognition and predictive modeling, significantly enhancing speed and accuracy. Human experts can explain AI-generated decisions to stakeholders, regulators, and end-users in critical situations, ensuring responsibility and accountability, which helps establish trust in AI-driven security solutions. Therefore, in this position paper, we argue that human-AI teaming is worthwhile in cybersecurity, in which human expertise such as intuition, critical thinking, and contextual understanding is combined with AI's computational power to improve overall cyber defenses.